
    Adiabatic optimization without local minima

    Several previous works have investigated the circumstances under which quantum adiabatic optimization algorithms can tunnel out of local energy minima that trap simulated annealing or other classical local search algorithms. Here we investigate the even more basic question of whether adiabatic optimization algorithms always succeed in polynomial time for trivial optimization problems in which there are no local energy minima other than the global minimum. Surprisingly, we find a counterexample in which the potential is a single basin on a graph, but the eigenvalue gap is exponentially small as a function of the number of vertices. In this counterexample, the ground state wavefunction consists of two "lobes" separated by a region of exponentially small amplitude. Conversely, we prove that if the ground state wavefunction is single-peaked, then the eigenvalue gap scales at worst as one over the square of the number of vertices. Comment: 20 pages, 1 figure. Journal version
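The central quantity in this abstract, the eigenvalue gap, can be illustrated numerically. The sketch below is not the paper's counterexample construction: it simply builds a discrete Hamiltonian H = L + V on a path graph, with L the graph Laplacian and V a single-basin (convex) potential, and computes the gap between the two lowest eigenvalues. All parameter choices are illustrative assumptions.

```python
import numpy as np

n = 200                                        # number of vertices (illustrative)
# Laplacian of the path graph on n vertices.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = np.arange(n)
V = np.diag(((x - n / 2) / n) ** 2)            # single-basin potential, minimum at the center
H = L + V

evals = np.linalg.eigvalsh(H)                  # eigenvalues in ascending order
gap = evals[1] - evals[0]                      # the eigenvalue gap discussed in the abstract
print(gap)
```

For a convex one-dimensional potential like this the gap stays polynomially large in n; the surprise reported in the abstract is that on more general graphs a single basin does not guarantee this.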

    Mapping State Unemployment

    In this data snapshot, authors Michael Ettlinger and Jordan Hensley report the relative level of initial unemployment claims for the week ending March 28 as a share of the labor force, and the “insured unemployment” as a share of the February labor force for the week ending March 21. Hawaii, Michigan, and Pennsylvania top the list of initial unemployment claims.

    Yang-Baxter operators need quantum entanglement to distinguish knots

    Any solution to the Yang-Baxter equation yields a family of representations of braid groups. Under certain conditions, identified by Turaev, the appropriately normalized trace of these representations yields a link invariant. Any Yang-Baxter solution can be interpreted as a two-qudit quantum gate. Here we show that if this gate is non-entangling, then the resulting invariant of knots is trivial. We thus obtain a general connection between topological entanglement and quantum entanglement, as suggested by Kauffman et al. Comment: 12 pages, 2 figures
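The distinction between entangling and non-entangling two-qudit gates can be made concrete with a small numpy sketch. The helper `is_entangling` below is an illustrative assumption, not the paper's definition: it only tests a finite sample of two-qubit product states, detecting entanglement in the output via its Schmidt rank, whereas a truly non-entangling gate must preserve all product states.

```python
import numpy as np

def is_entangling(U, tol=1e-10):
    """Illustrative check: does the two-qubit gate U map some sampled
    product state to a state of Schmidt rank > 1 (i.e., entangled)?"""
    zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    for a in (zero, one, plus):
        for b in (zero, one, plus):
            out = U @ np.kron(a, b)                        # apply gate to a product state
            s = np.linalg.svd(out.reshape(2, 2), compute_uv=False)
            if np.sum(s > tol) > 1:                        # Schmidt rank > 1 => entangled
                return True
    return False

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

print(is_entangling(SWAP))  # SWAP only permutes tensor factors, so it never creates entanglement
print(is_entangling(CNOT))  # CNOT entangles the product state |+>|0>
```

SWAP is the standard example of a non-entangling Yang-Baxter solution, and it is exactly the kind of gate the abstract says yields a trivial knot invariant.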

    Employment Income Drops in More Low-Income Than High-Income Households in All States

    Low-wage workers are being hit much harder in the COVID-19 economic crisis than higher-wage workers. This is evident in the much greater job loss in lower-wage industries than in higher-wage industries.

    Government Spending Across the World: How the United States Compares

    In this brief, authors Michael Ettlinger, Jordan Hensley, and Julia Vieira analyze how much the governments of different countries spend, and on what, to illuminate the range of fiscal policy options available and provide a basis for determining which approaches work best. They report that the United States ranks twenty-fourth in government spending as a share of GDP out of twenty-nine countries for which recent comparable data are available. The key determinant of where countries rank in overall government spending is the amount spent on social protection. The United States ranks last in spending on social protection as a share of GDP and twenty-second in per capita spending. The United States ranks at or near the top in military, health care, education, and law enforcement spending. Measuring government spending by different methods and including tax expenditures does not appear to significantly alter the conclusion that the United States is a low-tax, low-spending country relative to the other countries examined, particularly when compared to its fellow higher-income countries.

    On the Convergence of Stochastic Iterative Dynamic Programming Algorithms

    Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(lambda) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(lambda) and Q-learning belong.
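The Q-learning update whose convergence the paper analyzes is a one-line stochastic-approximation rule. The sketch below runs tabular Q-learning on a toy deterministic chain MDP invented for illustration (states 0..4, actions 0 = left and 1 = right, reward 1 for entering the goal state 4); the environment and all parameters are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9                       # step size and discount factor

def step(s, a):
    """Deterministic chain dynamics: action 1 moves right, action 0 moves left."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

for _ in range(20000):
    s = rng.integers(n_states - 1)            # sample a non-goal state
    a = rng.integers(n_actions)               # uniformly random behavior policy (Q-learning is off-policy)
    s2, r = step(s, a)
    # The stochastic-approximation update analyzed in the paper:
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

print(np.argmax(Q, axis=1)[:-1])              # greedy policy for states 0..3: always move right
```

Viewing the bracketed term as a noisy sample of the Bellman error is precisely what lets the paper's convergence theorem cover both Q-learning and TD(lambda).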